Web Survey Bibliography
Can coders of responses to open-ended survey questions consistently capture the meaning respondents intended? This is the question of reliability and validity. Researchers who use content analysis typically regard coder inference as a threat to validity: when coders are allowed to make inferences rather than mechanically follow a series of coding rules, they introduce variation in meaning that prevents researchers from making strong claims of reliability and validity. But is that always the case? In this presentation I argue that coder inference is not only allowable in certain situations; it is required and highly desirable. My argument begins with an examination of the standards we use to make evaluative judgments of reliability and validity. I then challenge some of the assumptions we make when designing coding rules, training coders, and assessing reliability and validity.
Web survey bibliography - Conference on Optimal Coding of Open-Ended Survey Data, 2008
- CAQDAS, Secondary Analysis and the Coding of Survey Data; 2008; Fielding, N.
- Machines that Learn how to Code Open-Ended Survey Data: Underlying Principles, Experimental Data, and...; 2008; Sebastiani, F.
- Computer coding of 1992 ANES Like/Dislike and MIP responses; 2008; Fan, D. P.
- CATA (Computer Aided Text Analysis) Options for the Coding of Open-Ended Survey Data; 2008; Skalski, P.
- Classifying Open Occupation Descriptions in the Current Population Survey; 2008; Conrad, F. G., Couper, M. P.
- Coding Responses Generated by Open-Ended Questions: Meaning Matching or Meaning Inference?; 2008; Potter, Ji.
- Open-ended questions and text analysis; 2008; Popping, R.
- Coding Verbal Data - What to Optimize?; 2008; Krippendorff, K.